
    Explaining the stellar initial mass function with the theory of spatial networks

    The distributions of stars and prestellar cores by mass (initial and dense core mass functions, IMF/DCMF) are among the key factors regulating star formation and are the subject of detailed theoretical and observational studies. Results from numerical simulations of star formation qualitatively resemble an observed mass function, a scale-free power law with a sharp decline at low masses. However, most analytic IMF theories critically depend on the empirically chosen input spectrum of mass fluctuations which evolve into dense cores and, subsequently, stars, and on the scaling relation between the amplitude and mass of a fluctuation. Here we propose a new approach exploiting the techniques from the field of network science. We represent a system of dense cores accreting gas from the surrounding diffuse interstellar medium (ISM) as a spatial network growing by preferential attachment and assume that the ISM density has a self-similar fractal distribution following the Kolmogorov turbulence theory. We effectively combine gravoturbulent and competitive accretion approaches and predict the accretion rate to be proportional to the dense core mass: dM/dt ∝ M. Then we describe the dense core growth and demonstrate that the power-law core mass function emerges independently of the initial distribution of density fluctuations by mass. Our model yields a power law solely defined by the fractal dimensionalities of the ISM and accreting gas. With a proper choice of the low-mass cut-off, it reproduces observations over three decades in mass. We also rule out a low-mass star dominated "bottom-heavy" IMF in a single star-forming region. Comment: 8 pages, 5 figures, v2 matches the published version
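    The growth law quoted above, dM/dt ∝ M, already hints at why a power-law tail can appear independently of the seed masses: mass-proportional accretion means each core grows exponentially, and exponentially distributed core ages then translate into a power-law mass function. The toy Monte Carlo sketch below illustrates only that generic mechanism; it is not the paper's spatial-network model, and the constants k and lam are arbitrary illustration values.

```python
import numpy as np

# Toy sketch (not the paper's spatial-network model): cores accrete at a
# rate proportional to their own mass, dM/dt = k*M, so each core grows
# exponentially with its age.  If core ages are exponentially distributed,
# the resulting mass function has a power-law tail ~ M^-(1 + lam/k)
# regardless of the seed-mass distribution -- the qualitative point the
# abstract makes about the emergence of the core mass function.
rng = np.random.default_rng(0)

n_cores = 500_000
k = 1.0        # assumed growth constant in dM/dt = k*M
lam = 1.35     # assumed rate of the exponential age distribution

age = rng.exponential(scale=1.0 / lam, size=n_cores)
m_seed = rng.lognormal(mean=0.0, sigma=0.5, size=n_cores)   # arbitrary seeds
mass = m_seed * np.exp(k * age)                             # exponential accretion growth

# Slope of the cumulative distribution in the high-mass tail:
# N(>M) ~ M^(-lam/k)  =>  dN/dM ~ M^-(1 + lam/k).
m_sorted = np.sort(mass)[::-1]
n_cum = np.arange(1, n_cores + 1)
tail = m_sorted > np.quantile(mass, 0.99)
slope = np.polyfit(np.log10(m_sorted[tail]), np.log10(n_cum[tail]), 1)[0]
print(f"tail slope of N(>M): {slope:.2f}  (expected about {-lam / k:.2f})")
```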

    Statistical Physics of Design

    Modern life increasingly relies on complex products that perform a variety of functions. The key difficulty of creating such products lies not in the manufacturing process, but in the design process. However, design problems are typically driven by multiple contradictory objectives and different stakeholders, have no obvious stopping criteria, and frequently prevent construction of prototypes or experiments. Such ill-defined, or "wicked," problems cannot be "solved" in the traditional sense with optimization methods. Instead, modern design techniques are focused on generating knowledge about the alternative solutions in the design space. In order to facilitate such knowledge generation, in this dissertation I develop the "Systems Physics" framework that treats the emergent structures within the design space as physical objects that interact via quantifiable forces. Mathematically, Systems Physics is based on maximal entropy statistical mechanics, which allows both drawing conceptual analogies between design problems and collective phenomena and performing numerical calculations to gain quantitative understanding. Systems Physics operates via a Model-Compute-Learn loop, with each step refining our thinking of design problems. I demonstrate the capabilities of Systems Physics in two very distinct case studies: Naval Engineering and self-assembly. For the Naval Engineering case, I focus on an established problem of arranging shipboard systems within the available hull space. I demonstrate the essential trade-off between minimizing the routing cost and maximizing the design flexibility, which can lead to abrupt phase transitions. I show how the design space can break into several locally optimal architecture classes that have very different robustness to external couplings. I illustrate how the topology of the shipboard functional network enters a tight interplay with the spatial constraints on placement. For the self-assembly problem, I show that the topology of self-assembled structures can be reliably encoded in the properties of the building blocks so that the structure and the blocks can be jointly designed. The work presented here provides both conceptual and quantitative advancements. In order to properly port the language and the formalism of statistical mechanics to the design domain, I critically re-examine such foundational ideas as system-bath coupling, coarse graining, particle distinguishability, and direct and emergent interactions. I show that the design space can be packed into a special information structure, a tensor network, which allows a seamless transition from graphical visualization to sophisticated numerical calculations. This dissertation provides the first quantitative treatment of the design problem that is not reduced to the narrow goals of mathematical optimization. Using a statistical mechanics perspective allows me to move beyond the dichotomy of "forward" and "inverse" design and frame design as a knowledge generation process instead. Such framing opens the way to further studies of the design space structures and the time- and path-dependent phenomena in design. The present work also benefits from, and contributes to, the philosophical interpretations of statistical mechanics developed by the soft matter community in the past 20 years.
The discussion goes far beyond physics and engages with literature from materials science, naval engineering, optimization problems, design theory, network theory, and economic complexity. PhD, Physics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163133/1/aklishin_1.pd
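    As a minimal, concrete illustration of the maximum-entropy framing described above (and not the dissertation's Systems Physics machinery), one can place a Boltzmann distribution over all arrangements of a few hypothetical units on a one-dimensional "hull" and watch the trade-off between mean routing cost and ensemble entropy, the simplest proxy for design flexibility. The slot count, functional connections, and temperatures below are invented for illustration.

```python
import itertools
import numpy as np

# Toy arrangement problem: place 3 connected units on a 1-D hull of 6 slots.
# The energy of an arrangement is its total routing cost (sum of position
# differences over the functional connections).  A Boltzmann distribution over
# all arrangements exposes the trade-off between low routing cost (low T) and
# design flexibility, measured here as the ensemble entropy (high T).
slots = range(6)
connections = [(0, 1), (1, 2)]             # hypothetical functional network

arrangements = list(itertools.permutations(slots, 3))
energy = np.array([sum(abs(a[i] - a[j]) for i, j in connections)
                   for a in arrangements], dtype=float)

for T in (0.2, 1.0, 5.0):
    w = np.exp(-energy / T)
    prob = w / w.sum()
    mean_cost = float(prob @ energy)
    entropy = float(-(prob * np.log(prob)).sum())    # design flexibility proxy
    print(f"T={T:>4}: mean routing cost={mean_cost:.2f}, entropy={entropy:.2f}")
```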

    Data-Induced Interactions of Sparse Sensors

    Large-dimensional empirical data in science and engineering frequently has low-rank structure and can be represented as a combination of just a few eigenmodes. Because of this structure, we can use just a few spatially localized sensor measurements to reconstruct the full state of a complex system. The quality of this reconstruction, especially in the presence of sensor noise, depends significantly on the spatial configuration of the sensors. Multiple algorithms based on gappy interpolation and QR factorization have been proposed to optimize sensor placement. Here, instead of an algorithm that outputs a singular "optimal" sensor configuration, we take a thermodynamic view to compute the full landscape of sensor interactions induced by the training data. The landscape takes the form of the Ising model in statistical physics, and accounts for both the data variance captured at each sensor location and the crosstalk between sensors. Mapping out these data-induced sensor interactions allows combining them with external selection criteria and anticipating sensor replacement impacts. Comment: 17 RevTeX pages, 10 figures
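    For context, here is a minimal sketch of the sparse-sensor reconstruction setting the abstract starts from (the paper's Ising-type interaction landscape itself is not reproduced): training snapshots supply a low-rank basis via the SVD, and a full state is then estimated from a handful of noisy point measurements by least squares in that basis. The data, sensor count, and noise level are synthetic.

```python
import numpy as np

# Sparse-sensor reconstruction on synthetic low-rank data: learn a rank-r
# basis from training snapshots, then recover a full test state from a few
# noisy point measurements by least squares in that basis.
rng = np.random.default_rng(1)

n, m, r = 200, 500, 5                      # state size, snapshots, modes kept
modes = np.linalg.qr(rng.normal(size=(n, r)))[0]      # hidden "true" modes
X = modes @ rng.normal(size=(r, m)) * 3.0             # low-rank training data

U = np.linalg.svd(X, full_matrices=False)[0][:, :r]   # empirical basis

sensors = rng.choice(n, size=8, replace=False)        # sensor locations
x_true = modes @ rng.normal(size=r) * 3.0             # unseen test state
y = x_true[sensors] + 0.05 * rng.normal(size=sensors.size)   # noisy readings

# Least-squares estimate of the mode amplitudes from the sensor rows of U.
a_hat = np.linalg.lstsq(U[sensors, :], y, rcond=None)[0]
x_hat = U @ a_hat

err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")
```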

    No Free Lunch for Avoiding Clustering Vulnerabilities in Distributed Systems

    Emergent design failures are ubiquitous in complex systems, and often arise when system elements cluster. Approaches to systematically reduce clustering could improve a design's resilience, but reducing clustering is difficult if it is driven by collective interactions among design elements. Here, we use techniques from statistical physics to identify mechanisms by which spatial clusters of design elements emerge in complex systems modelled by heterogeneous networks. We find that, in addition to naive, attraction-driven clustering, heterogeneous networks can exhibit emergent, repulsion-driven clustering. We draw quantitative connections between our results on a model system in naval engineering and entropy-driven phenomena in nanoscale self-assembly, and give a general argument that the clustering phenomena we observe should arise in many distributed systems. We identify circumstances under which generic design problems will exhibit trade-offs between clustering and uncertainty in design objectives, and we present a framework to identify and quantify trade-offs to manage clustering vulnerabilities. Comment: 20 pages, 5 figures
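    A toy version of the naive, attraction-driven case mentioned above can be written down directly (the emergent, repulsion-driven mechanism that is the paper's main finding is not reproduced here): connected elements on a one-dimensional layout feel an attractive energy proportional to their separation, and the Boltzmann-averaged spatial spread of the elements shrinks as the attraction strengthens. The layout size, coupling network, and interaction strengths are invented for illustration.

```python
import itertools
import numpy as np

# Baseline attraction-driven clustering: three design elements occupy distinct
# slots on a 1-D layout of 8 positions; connected elements feel an attractive
# energy J times their separation.  Averaging over the Boltzmann ensemble shows
# the expected spread of the elements shrinking as J grows.
positions = range(8)
edges = [(0, 1), (1, 2)]                   # hypothetical coupling network

layouts = list(itertools.permutations(positions, 3))
sep = np.array([sum(abs(lay[i] - lay[j]) for i, j in edges) for lay in layouts],
               dtype=float)
spread = np.array([max(lay) - min(lay) for lay in layouts], dtype=float)

for J in (0.0, 0.5, 2.0):                  # attraction strength
    w = np.exp(-J * sep)
    prob = w / w.sum()
    print(f"J={J:>3}: expected spatial spread = {prob @ spread:.2f}")
```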

    Human Learning of Hierarchical Graphs

    Humans are constantly exposed to sequences of events in the environment. Those sequences frequently evince statistical regularities, such as the probabilities with which one event transitions to another. Collectively, inter-event transition probabilities can be modeled as a graph or network. Many real-world networks are organized hierarchically and understanding how humans learn these networks is an ongoing aim of current investigations. While much is known about how humans learn basic transition graph topology, whether and to what degree humans can learn hierarchical structures in such graphs remains unknown. We investigate how humans learn hierarchical graphs of the Sierpiński family using computer simulations and behavioral laboratory experiments. We probe the mental estimates of transition probabilities via the surprisal effect: a phenomenon in which humans react more slowly to less expected transitions, such as those between communities or modules in the network. Using mean-field predictions and numerical simulations, we show that surprisal effects are stronger for finer-level than coarser-level hierarchical transitions. Surprisal effects at coarser levels of the hierarchy are difficult to detect for limited learning times or in small samples. Using a serial response experiment with human participants (n = 100), we replicate our predictions by detecting a surprisal effect at the finer level of the hierarchy but not at the coarser level of the hierarchy. To further explain our findings, we evaluate the presence of a trade-off in learning, whereby humans who learned the finer-level of the hierarchy better tended to learn the coarser-level worse, and vice versa. Our study elucidates the processes by which humans learn hierarchical sequential events. Our work charts a road map for future investigation of the neural underpinnings and behavioral manifestations of graph learning. Comment: 22 pages, 10 figures, 1 table
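    The surprisal quantity at the center of the study can be illustrated with a much smaller toy than the Sierpiński graphs, and without the paper's model of human learning errors: a walker traverses a two-module graph, a learner estimates transition probabilities from observed counts, and the estimated surprisal, the negative log of the estimated probability, comes out higher for the rare bridge transition between modules than for a typical within-module transition. The graph, walk length, and prior below are illustrative choices.

```python
import numpy as np

# Two 4-node cliques joined by a single bridge edge (0 -- 4).  A learner
# estimates transition probabilities from counts along one finite walk;
# surprisal of a transition is -log of the estimated probability.  The bridge
# transition is both less probable and less frequently observed, so its
# estimated surprisal is higher than that of a within-module transition.
rng = np.random.default_rng(2)

n = 8
A = np.zeros((n, n))
A[:4, :4] = 1
A[4:, 4:] = 1
np.fill_diagonal(A, 0)
A[0, 4] = A[4, 0] = 1

P = A / A.sum(axis=1, keepdims=True)       # true transition matrix

steps = 5_000
counts = np.ones((n, n)) * 1e-3            # tiny prior to avoid log(0)
node = 0
for _ in range(steps):
    nxt = rng.choice(n, p=P[node])
    counts[node, nxt] += 1
    node = nxt

P_hat = counts / counts.sum(axis=1, keepdims=True)
within = -np.log(P_hat[1, 2])              # a within-module transition
between = -np.log(P_hat[0, 4])             # the cross-module bridge transition
print(f"estimated surprisal: within-module {within:.2f}, "
      f"cross-module {between:.2f}")
```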

    Learning Dynamic Graphs, Too Slow

    The structure of knowledge is commonly described as a network of key concepts and semantic relations between them. A learner of a particular domain can discover this network by navigating the nodes and edges presented by instructional material, such as a textbook, workbook, or other text. While over a long temporal period such exploration processes are certain to discover the whole connected network, little is known about how the learning is affected by the dual pressures of finite study time and human mental errors. Here we model the learning of linear algebra textbooks with finite length random walks over the corresponding semantic networks. We show that if a learner does not keep up with the pace of material presentation, the learning can be an order of magnitude worse than it is in the asymptotic limit. Further, we find that this loss is compounded by three types of mental errors: forgetting, shuffling, and reinforcement. Broadly, our study informs the design of teaching materials from both structural and temporal perspectives.Comment: 29 RevTeX pages, 13 figure
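    A stripped-down version of the finite-walk learner makes the basic point concrete (using a generic random graph in place of the linear-algebra semantic networks and omitting the paper's forgetting, shuffling, and reinforcement errors): the fraction of a network's edges that a single random walk traverses grows only slowly with walk length, so a learner cut off early discovers far less than the asymptotic limit suggests. The graph size, edge density, and walk lengths are illustrative.

```python
import numpy as np

# Random-walk "learner" on an Erdos-Renyi stand-in for a semantic network of
# ~100 concepts: count what fraction of the network's edges a single walk of a
# given length has traversed.
rng = np.random.default_rng(3)

n, p_edge = 100, 0.06
A = (rng.random((n, n)) < p_edge).astype(float)
A = np.triu(A, 1)
A = A + A.T                                # symmetric adjacency, no self-loops
deg = A.sum(axis=1)

def edges_discovered(walk_length):
    """Fraction of edges traversed in one random walk of the given length."""
    seen = set()
    node = int(np.argmax(deg))             # start at a well-connected concept
    for _ in range(walk_length):
        nbrs = np.flatnonzero(A[node])
        nxt = int(rng.choice(nbrs))
        seen.add((min(node, nxt), max(node, nxt)))
        node = nxt
    return len(seen) / (A.sum() / 2)

for L in (100, 1_000, 10_000):
    print(f"walk length {L:>6}: {edges_discovered(L):.2f} of edges discovered")
```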